
Last Update: 2025/3/26

SenseFlow Chat API

The SenseFlow Chat API allows you to build conversational AI applications with context awareness. This is suitable for chatbots, virtual assistants, and other applications requiring conversation history.

Endpoints

Chat Message

POST https://platform.llmprovider.ai/v1/agent/chat-messages

Request Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | application/json |

Request Body

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | Agent name |
| query | string | User input/question content |
| inputs | object | (Optional) Key-value pairs of variables defined in the app |
| files | array | (Optional) Array of file objects for multimodal interactions |
| response_mode | string | Response mode: streaming (recommended) or blocking |
| conversation_id | string | (Optional) Conversation ID for continuing previous chats |
| user | string | Unique identifier for the end user |

Files Object Structure

| Field | Type | Description |
| --- | --- | --- |
| type | string | File type (image, document, audio, video, custom) |
| transfer_method | string | remote_url or local_file |
| url | string | File URL (when transfer_method is remote_url) |
| upload_file_id | string | File ID (when transfer_method is local_file) |

Response

The response varies based on the response_mode:

  • For blocking mode: Returns a ChatResponse object
  • For streaming mode: Returns a stream of ChunkChatResponse objects
ChatResponse Structure

| Field | Type | Description |
| --- | --- | --- |
| id | string | Message ID |
| conversation_id | string | Conversation ID |
| answer | string | Complete response content |
| message_files | array | Array of message files (for assistant responses) |
| metadata | object | Metadata information |
| created_at | integer | Message creation timestamp |

Example Response (Blocking Mode)

{
  "id": "msg_12345",
  "conversation_id": "conv_12345",
  "answer": "Based on the image, I can see...",
  "message_files": [],
  "metadata": {},
  "created_at": 1705569239
}
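
For reference, here is a minimal Python sketch of a blocking-mode call using the requests library. The empty model value, API key, and user ID are placeholders, not fixed values.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your key
URL = "https://platform.llmprovider.ai/v1/agent/chat-messages"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "",                  # agent name (placeholder)
        "query": "Hello!",
        "response_mode": "blocking",
        "user": "abc-123",
    },
)
resp.raise_for_status()
chat = resp.json()                    # ChatResponse object
print(chat["answer"])                 # complete response content
print(chat["conversation_id"])        # pass back to continue the chat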

Streaming Response Events

Each streaming chunk starts with data: and chunks are separated by \n\n. The events follow the same structure as the Completion API. Example:

data: {"event": "message", "task_id": "900bbd43-dc0b-4383-a372-aa6e6c414227", "message_id": "663c5084", "answer": "Hi", "created_at": 1705398420}\n\n

Different event types in the stream:

| Event Type | Description | Fields |
| --- | --- | --- |
| message | Text chunk from the LLM | task_id, message_id, answer, created_at |
| message_file | A new file has been created by a tool | id, type, belongs_to, url, conversation_id |
| message_end | Streaming has ended | task_id, message_id, metadata, usage, retriever_resources |
| tts_message | Speech synthesis chunk | task_id, message_id, audio (base64), created_at |
| tts_message_end | Speech synthesis completion | task_id, message_id, audio (empty), created_at |
| message_replace | Content moderation replacement | task_id, message_id, answer, created_at |
| workflow_started | Workflow starts execution | task_id, workflow_run_id, event, data (id, workflow_id, sequence_number, created_at) |
| node_started | Node execution started | task_id, workflow_run_id, event, data (id, node_id, node_type, title, index, predecessor_node_id, inputs, created_at) |
| node_finished | Node execution ended | task_id, workflow_run_id, event, data (id, node_id, node_type, title, index, predecessor_node_id, inputs, created_at) |
| workflow_finished | Workflow execution ended | task_id, workflow_run_id, event, data (id, workflow_id, status, outputs, error, elapsed_time, total_tokens, total_steps, created_at, finished_at) |
| error | Stream error event | task_id, message_id, status, code, message |
| ping | Keep-alive ping (every 10s) | - |
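
A minimal Python sketch of consuming the stream with the requests library, assuming the server sends data:-prefixed chunks separated by blank lines as described above. Event names and fields come from the table; everything else is a placeholder.

import json
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://platform.llmprovider.ai/v1/agent/chat-messages"

payload = {
    "model": "",                      # agent name (placeholder)
    "query": "What can you tell me about this image?",
    "response_mode": "streaming",
    "user": "abc-123",
}

with requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
) as resp:
    resp.raise_for_status()
    # iter_lines() splits on newlines; blank lines separate chunks
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank separators and keep-alive noise
        event = json.loads(line[len("data: "):])
        if event.get("event") == "message":
            print(event["answer"], end="", flush=True)  # text chunk
        elif event.get("event") == "message_end":
            break  # streaming has ended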

Example Request

curl -X POST 'https://platform.llmprovider.ai/v1/agent/chat-messages' \
--header 'Authorization: Bearer $YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "",
  "query": "What can you tell me about this image?",
  "files": [
    {
      "type": "image",
      "transfer_method": "remote_url",
      "url": "https://example.com/image.jpg"
    }
  ],
  "response_mode": "streaming",
  "conversation_id": "conv_12345",
  "user": "abc-123"
}'

Upload File

POST https://platform.llmprovider.ai/v1/agent/files/upload

Upload files (currently only supports images) for use in multimodal interactions. Supports png, jpg, jpeg, webp, and gif formats.

Request Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | multipart/form-data |

Request Body Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | Agent name |
| file | file | The file to upload |
| user | string | User identifier (must match the message API user ID) |

Response

| Field | Type | Description |
| --- | --- | --- |
| id | uuid | File ID |
| name | string | File name |
| size | integer | File size in bytes |
| extension | string | File extension |
| mime_type | string | File MIME type |
| created_by | uuid | Uploader ID |
| created_at | timestamp | Upload timestamp |
Example Response

{
  "id": "72fa9618-8f89-4a37-9b33-7e1178a24a67",
  "name": "example.png",
  "size": 1024,
  "extension": "png",
  "mime_type": "image/png",
  "created_by": 123,
  "created_at": 1577836800
}

Example Request

curl -X POST 'https://platform.llmprovider.ai/v1/agent/files/upload' \
--header 'Authorization: Bearer $YOUR_API_KEY' \
--form 'file=@localfile;type=image/[png|jpeg|jpg|webp|gif]' \
--form 'model=' \
--form 'user=abc-123'
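
A Python sketch of the full two-step flow: upload a local image, then reference the returned id as upload_file_id in a chat message. The file path, model value, and API key are placeholders.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder
BASE = "https://platform.llmprovider.ai/v1/agent"
headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: upload the image (requests builds the multipart/form-data body)
with open("example.png", "rb") as f:  # placeholder path
    upload = requests.post(
        f"{BASE}/files/upload",
        headers=headers,
        files={"file": ("example.png", f, "image/png")},
        data={"model": "", "user": "abc-123"},
    )
upload.raise_for_status()
file_id = upload.json()["id"]

# Step 2: reference the uploaded file in a chat message
chat = requests.post(
    f"{BASE}/chat-messages",
    headers=headers,
    json={
        "model": "",
        "query": "What can you tell me about this image?",
        "files": [{
            "type": "image",
            "transfer_method": "local_file",
            "upload_file_id": file_id,
        }],
        "response_mode": "blocking",
        "user": "abc-123",  # must match the user ID used for the upload
    },
)
chat.raise_for_status()
print(chat.json()["answer"])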

Stop Response

POST https://platform.llmprovider.ai/v1/agent/chat-messages/:task_id/stop

Stop a streaming response. Only available for streaming mode.

Request Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | application/json |

Path Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| task_id | string | Task ID obtained from the streaming response |

Request Body Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| user | string | Yes | User identifier (must match the message API user ID) |
| model | string | Yes | Agent name |

Response

| Field | Type | Description |
| --- | --- | --- |
| result | string | Operation result (e.g. "success") |
Example Response

{
  "result": "success"
}

Example Request

curl -X POST 'https://platform.llmprovider.ai/v1/agent/chat-messages/task_123/stop' \
--header 'Authorization: Bearer $YOUR_API_KEY' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "",
  "user": "abc-123"
}'
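
A minimal Python sketch of stopping a stream mid-flight, assuming the task_id was captured from a previously received streaming event (each event in the table above carries one).

import requests

API_KEY = "YOUR_API_KEY"  # placeholder
task_id = "task_123"      # taken from a streaming event's task_id field

resp = requests.post(
    f"https://platform.llmprovider.ai/v1/agent/chat-messages/{task_id}/stop",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "", "user": "abc-123"},  # user must match the message API user ID
)
resp.raise_for_status()
print(resp.json())  # {"result": "success"}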